Conversation

thong-phn

This pull request enables TensorFlow Lite Micro (TFLM) inference to be offloaded to the HiFi4 DSP.

Key changes include:

  • Porting the TFLM framework to the HiFi4 architecture.
  • Integrating OpenAMP for efficient data communication between the main and DSP cores.
  • Adding a micro_speech sample application to demonstrate a real-world keyword spotting model.

This contribution is part of the Google Summer of Code 2025 project: Running Open-Source ML Models on HiFi4 DSP with Zephyr RTOS.


Hello @thong-phn, and thank you very much for your first pull request to the Zephyr project!
Our Continuous Integration pipeline will execute a series of checks on your Pull Request commit messages and code, and you are expected to address any failures by updating the PR. Please take a look at our commit message guidelines to find out how to format your commit messages, and at our contribution workflow to understand how to update your Pull Request. If you haven't already, please make sure to review the project's Contributor Expectations and update (by amending and force-pushing the commits) your pull request if necessary.
If you are stuck or need help please join us on Discord and ask your question there. Additionally, you can escalate the review when applicable. 😊

@thong-phn
Author

Dear all, the patch needs additional review; I will reopen it when it's ready.

@JarmouniA
Contributor

You can just mark the PR as draft.

@JarmouniA JarmouniA reopened this Sep 27, 2025
@JarmouniA JarmouniA marked this pull request as draft September 27, 2025 17:08
@thong-phn thong-phn force-pushed the gsoc2025 branch 3 times, most recently from 53269f9 to c4b78de, on October 3, 2025 05:38
@thong-phn thong-phn changed the title from "tflite-micro: Add support for HiFi4 DSP" to "samples: tflite-micro: add micro_speech application with OpenAMP on i.MX8MP" on Oct 3, 2025
@thong-phn thong-phn force-pushed the gsoc2025 branch 7 times, most recently from 89914e8 to a9c3ddb, on October 4, 2025 09:32
This sample demonstrates TensorFlow Lite Micro speech recognition
capabilities on Zephyr RTOS, specifically designed for i.MX8MP platforms
with OpenAMP support for inter-core communication.

The sample:
- Processes audio input at 16 kHz, S16_LE, mono format.
- Uses resource table for shared memory management between cores.

Board tested: imx8mp_evk_mimx8ml8_adsp

Known limitations:
- If two commands are spoken within 1000 ms,
the second command may not be detected.
- If a command lasts longer than 1000 ms,
it may be detected as two separate commands.

Link to Linux application:
https://github.com/thong-phn/gsoc2025-linux-app

Source of micro_speech model:
https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples/micro_speech

Signed-off-by: Thong Phan <[email protected]>
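
As a rough reference, the audio contract and detection window described in the commit message above boil down to a handful of constants; the names below are illustrative only and are not taken from the sample sources:

#define AUDIO_SAMPLE_RATE_HZ   16000  /* 16 kHz input */
#define AUDIO_BYTES_PER_SAMPLE 2      /* S16_LE */
#define AUDIO_CHANNELS         1      /* mono */
#define COMMAND_WINDOW_MS      1000   /* roughly one command per 1000 ms window,
                                       * per the limitations listed above */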
@thong-phn thong-phn force-pushed the gsoc2025 branch 2 times, most recently from bd66044 to 5214cb5, on October 8, 2025 16:41
Thong Phan added 2 commits October 9, 2025 01:04
…hyr)

This commit adds OpenAMP Remote Processor Messaging (RPMsg) support
to enable communication between Linux userspace and the Zephyr
micro_speech application running on the remote core.

Implementation details:
- Endpoint name: "audio_pcm"
- Buffer flow: hold/release mechanism for efficient memory management
- Queue depth: 16 messages with 4-byte alignment (K_MSGQ_DEFINE)
- Notify path: mailbox/IPM for inter-processor interrupts
- Linux userspace maps to this endpoint via /dev/ttyRPMSG* device

Signed-off-by: Thong Phan <[email protected]>
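
A minimal sketch of the queue and endpoint callback this commit message describes is shown below; the struct layout and function names are assumptions rather than the actual sources. The callback holds the incoming RPMsg RX buffer and queues a pointer to it, and the inference thread releases the buffer once the frame has been consumed:

#include <zephyr/kernel.h>
#include <openamp/open_amp.h>

struct audio_msg {
	void *buf;   /* held RPMsg RX buffer */
	size_t len;  /* payload length in bytes */
};

/* 16 messages, 4-byte alignment, as described above */
K_MSGQ_DEFINE(audio_msgq, sizeof(struct audio_msg), 16, 4);

static int audio_pcm_ept_cb(struct rpmsg_endpoint *ept, void *data,
			    size_t len, uint32_t src, void *priv)
{
	struct audio_msg msg = { .buf = data, .len = len };

	/* Hold the RX buffer so it stays valid until inference consumes it */
	rpmsg_hold_rx_buffer(ept, data);

	if (k_msgq_put(&audio_msgq, &msg, K_NO_WAIT) != 0) {
		/* Queue full: drop the frame and give the buffer back */
		rpmsg_release_rx_buffer(ept, data);
	}

	return RPMSG_SUCCESS;
}

The endpoint itself would be created with rpmsg_create_ept() using the "audio_pcm" name from the commit message, with the mailbox/IPM driver providing the inter-processor notification path.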
Micro-speech application on i.MX8MP HiFi4 DSP with OpenAMP communication.

Documentation includes:
- Audio contract (16kHz, S16_LE, 20ms frames)
- Build and usage instructions
- Integration with Linux userspace application
- Sample output and known limitations

The sample detects speech commands (yes/no/silence/unknown) using a 20KB
neural network, processing audio data sent via RPMsg from Cortex-A cores.

Tested on i.MX8MP EVK with HiFi4 DSP as remote processor.

Signed-off-by: Thong Phan <[email protected]>
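
For reference, the documented audio contract works out to 320 samples (640 bytes) per frame. A sketch, reusing the illustrative constants from the earlier snippet:

#define FRAME_DURATION_MS 20
#define SAMPLES_PER_FRAME (AUDIO_SAMPLE_RATE_HZ * FRAME_DURATION_MS / 1000) /* 320 samples */
#define FRAME_SIZE_BYTES  (SAMPLES_PER_FRAME * AUDIO_BYTES_PER_SAMPLE)      /* 640 bytes of S16_LE mono */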

sonarqubecloud bot commented Oct 8, 2025

@@ -0,0 +1,9 @@
CONFIG_LOG_PRINTK=n

Remove all these snippet files.
Keep only imx8mp support since this was tested and mentioned in the README.

@@ -0,0 +1,24 @@
/*
* Copyright (c) 2023 NXP

The copyright should be "Copyright 2025 NXP".

@@ -0,0 +1,24 @@
/*

Remove this file.

@@ -0,0 +1,6 @@
name: nxp-openamp-imx8-adsp

Remove this file.

int micro_speech_process_audio(const int16_t *audio_data, size_t audio_data_size);
}

#endif /* MICRO_SPEECH_OPENAMP_MODEL_RUNNER_H_ */ No newline at end of file

Add a newline at the end of the file to fix the compliance check.

CONFIG_CPP=y
CONFIG_STD_CPP17=y
CONFIG_TENSORFLOW_LITE_MICRO=y
CONFIG_MAIN_STACK_SIZE=6144 # tflm arena size

Add the comment on top of the config:

# stack size needed to safely initialize
# and run TensorFlow Lite Micro operations
# in the main thread
CONFIG_MAIN_STACK_SIZE=6144

CONFIG_KERNEL_BIN_NAME="micro_speech_openamp"
CONFIG_PRINTK=n
CONFIG_IPM=y
CONFIG_HEAP_MEM_POOL_SIZE=5120 # rpmsg buffers

Same as above:

# rpmsg buffers + OpenAMP overhead + safety margin
CONFIG_HEAP_MEM_POOL_SIZE=5120

I also believe this can be decreased, but I didn't check which value is acceptable.

${OPTIONAL_TFLITE_DIR}/micro/kernels/filter_bank_log.cc
)

if ("${BOARD_ARCH}" STREQUAL "xtensa")

This should be if("${ARCH}" STREQUAL "xtensa").

#include "audio_preprocessor_int8_model.hpp"
#include "micro_speech_quantized_model.hpp"

#include "transport/rpmsg_transport.h" // For sending results back

This line should be in patch 2; otherwise we get "fatal error: transport/rpmsg_transport.h: No such file or directory" when compiling.
Each patch should compile independently.

Also, please follow the coding style in all files: use /** */ for Doxygen comments that need to appear in the documentation.
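
For example, a declaration like micro_speech_process_audio() (from the model-runner header shown earlier) could carry a Doxygen block in that style; the description and return convention below are only a guess at the intended semantics:

/**
 * @brief Run keyword-spotting inference on one frame of audio.
 *
 * @param audio_data      Pointer to S16_LE mono samples.
 * @param audio_data_size Number of elements in audio_data.
 *
 * @return 0 on success, a negative error code otherwise.
 */
int micro_speech_process_audio(const int16_t *audio_data, size_t audio_data_size);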

LOG_MODULE_REGISTER(micro_speech_openamp);

#include "inference/model_runner.hpp"
#include "transport/rpmsg_transport.h"

This line should be in patch 2; otherwise we get "fatal error: transport/rpmsg_transport.h: No such file or directory" when compiling.
